
    On the Goodness-of-Fit Tests for Some Continuous Time Processes

    We present a review of several results concerning the construction of Cramér-von Mises and Kolmogorov-Smirnov type goodness-of-fit tests for continuous time processes. As models we consider a stochastic differential equation with small noise, an ergodic diffusion process, a Poisson process and self-exciting point processes. For each model we propose tests that attain the asymptotic size α and discuss the behaviour of the power function under local alternatives. The results of numerical simulations of the tests are presented. Comment: 22 pages, 2 figures.
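    As a rough, hedged illustration of the kind of statistic involved (not the constructions analysed in the paper), the Python sketch below simulates an ergodic Ornstein-Uhlenbeck diffusion by Euler-Maruyama and evaluates a Kolmogorov-Smirnov type distance between the empirical law of the (thinned) trajectory and the invariant distribution implied by the null model. The drift and diffusion parameters, the thinning step, and the neglect of serial dependence are all simplifying assumptions made here for illustration.

```python
# Illustrative sketch only: a KS-type goodness-of-fit check for an ergodic diffusion,
# comparing an observed Ornstein-Uhlenbeck trajectory against the invariant law that
# the null model implies. Parameter values below are arbitrary example choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Euler-Maruyama simulation of dX_t = -theta * X_t dt + sigma dW_t under the null.
theta, sigma = 1.0, 1.0
T, n = 200.0, 200_000
dt = T / n
x = np.empty(n + 1)
x[0] = 0.0
for k in range(n):
    x[k + 1] = x[k] - theta * x[k] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Under the null, the invariant distribution is N(0, sigma^2 / (2 * theta)).
inv_std = sigma / np.sqrt(2.0 * theta)

# KS-type statistic: sup_x |F_hat(x) - F(x)| over a thinned sample of the trajectory.
sample = np.sort(x[:: n // 5000])
ecdf = np.arange(1, sample.size + 1) / sample.size
ks_stat = np.max(np.abs(stats.norm.cdf(sample, scale=inv_std) - ecdf))
print(f"KS-type statistic: {ks_stat:.4f}")
```

    Calibrating such a statistic to an asymptotic size α requires limit theory for dependent observations of the kind the tests in the paper are built on; the sketch only computes the raw statistic.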

    Joint PDF modelling of turbulent flow and dispersion in an urban street canyon

    The joint probability density function (PDF) of turbulent velocity and concentration of a passive scalar in an urban street canyon is computed using a newly developed particle-in-cell Monte Carlo method. Compared to moment closures, the PDF methodology provides the full one-point, one-time PDF of the underlying fields, containing all higher moments and correlations. The small-scale mixing of the scalar released from a concentrated source at the street level is modelled by the interaction by exchange with the conditional mean (IECM) model, with a micro-mixing time scale designed for geometrically complex settings. The boundary layer along no-slip walls (building sides and tops) is fully resolved using an elliptic relaxation technique, which captures the high anisotropy and inhomogeneity of the Reynolds stress tensor in these regions. A less computationally intensive technique based on wall functions to represent the boundary layers, and its effect on the solution, are also explored. The calculated statistics are compared to experimental data and large-eddy simulation. The present work can be considered the first computation of the full joint PDF of velocity and a transported passive scalar in an urban setting. The methodology proves successful in providing high-level statistical information on the turbulence and pollutant concentration fields in complex urban scenarios. Comment: Accepted in Boundary-Layer Meteorology, Feb. 19, 200
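    The IECM mixing step itself is compact enough to sketch. The toy Python code below is a minimal, hedged illustration of that step only, not of the paper's particle-in-cell Monte Carlo solver: particles carry one velocity component and a scalar concentration, the conditional mean is approximated by binning particles on velocity, and each particle's scalar relaxes toward the mean of its bin over a micro-mixing time scale. All numerical values and the binning scheme are assumptions for the example.

```python
# Minimal sketch of the IECM (interaction by exchange with the conditional mean)
# micro-mixing step: d(phi)/dt = -(phi - <phi | u>) / t_mix for each particle.
# The bin-based conditional mean, time scale and particle count are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_particles = 10_000
u = rng.standard_normal(n_particles)      # one velocity component per particle
phi = (u > 1.0).astype(float)             # scalar: 1 near a notional source, else 0
t_mix, dt = 0.5, 0.01                     # micro-mixing time scale and time step

def iecm_step(u, phi, dt, t_mix, n_bins=20):
    """Relax each particle's scalar toward the conditional mean <phi | u>."""
    edges = np.quantile(u, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])
    bins = np.digitize(u, edges)
    cond_mean = np.zeros_like(phi)
    for b in np.unique(bins):
        mask = bins == b
        cond_mean[mask] = phi[mask].mean()
    return phi - (phi - cond_mean) * dt / t_mix

for _ in range(200):
    phi = iecm_step(u, phi, dt, t_mix)

print(f"scalar mean: {phi.mean():.3f}, scalar variance after mixing: {phi.var():.4f}")
```

    Conditioning the mixing on velocity is what distinguishes IECM from the simpler IEM model; in a full solver the particles would also be transported and the velocity field evolved, which this fragment deliberately omits.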

    Coverage, Continuity and Visual Cortical Architecture

    The primary visual cortex of many mammals contains a continuous representation of visual space, with a roughly repetitive aperiodic map of orientation preferences superimposed. It was recently found that orientation preference maps (OPMs) obey statistical laws which are apparently invariant among species widely separated in eutherian evolution. Here, we examine whether one of the most prominent models for the optimization of cortical maps, the elastic net (EN) model, can reproduce this common design. The EN model generates representations which optimally trade off stimulus space coverage and map continuity. While this model has been used in numerous studies, no analytical results about the precise layout of the predicted OPMs have been obtained so far. We present a mathematical approach to analytically calculate the cortical representations predicted by the EN model for the joint mapping of stimulus position and orientation. We find that in all previously studied regimes, predicted OPM layouts are perfectly periodic. An unbiased search through the EN parameter space identifies a novel regime of aperiodic OPMs with pinwheel densities lower than found in experiments. In an extreme limit, aperiodic OPMs quantitatively resembling experimental observations emerge. Stabilization of these layouts results from strong nonlocal interactions rather than from a coverage-continuity compromise. Our results demonstrate that optimization models for stimulus representations dominated by nonlocal suppressive interactions are in principle capable of correctly predicting the common OPM design. They call into question whether visual cortical feature representations can be explained by a coverage-continuity compromise. Comment: 100 pages, including an Appendix, 21 + 7 figures.
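    For readers unfamiliar with the model class, the sketch below shows one standard numerical form of elastic-net dynamics for a joint map of retinotopic position and orientation preference. It is a hedged illustration of the generic EN update (a soft coverage term plus a continuity term), not the specific parameterisation or the analytical treatment used in the paper; the grid size, feature scaling, learning rate, elasticity constant and periodic boundary conditions are all assumptions of the example.

```python
# Hedged sketch of elastic-net (EN) style map formation: cortical units carry a
# feature vector (x, y, q1, q2), where (q1, q2) encodes orientation preference in
# double-angle coordinates. Each stimulus pulls units via a soft assignment
# (coverage) while a discrete Laplacian keeps neighbouring units similar (continuity).
import numpy as np

rng = np.random.default_rng(2)
grid = 24                                         # cortical sheet: grid x grid units
xs, ys = np.meshgrid(np.linspace(0, 1, grid), np.linspace(0, 1, grid))
W = np.stack([xs.ravel(), ys.ravel(),
              0.02 * rng.standard_normal(grid * grid),
              0.02 * rng.standard_normal(grid * grid)], axis=1)

sigma, eta, lr, r = 0.08, 0.02, 0.05, 0.2         # coverage width, elasticity, rate, feature radius

def laplacian(W, grid):
    """Discrete Laplacian of the feature map (periodic boundaries for simplicity)."""
    F = W.reshape(grid, grid, -1)
    L = (np.roll(F, 1, 0) + np.roll(F, -1, 0) +
         np.roll(F, 1, 1) + np.roll(F, -1, 1) - 4 * F)
    return L.reshape(grid * grid, -1)

for _ in range(5000):
    # Random stimulus: position in [0, 1]^2, orientation on a circle of radius r.
    theta = rng.uniform(0.0, np.pi)
    s = np.array([rng.uniform(), rng.uniform(),
                  r * np.cos(2 * theta), r * np.sin(2 * theta)])
    d2 = np.sum((W - s) ** 2, axis=1)
    g = np.exp(-d2 / (2 * sigma ** 2))
    g /= g.sum()                                  # soft assignment: coverage term
    W += lr * (g[:, None] * (s - W) + eta * laplacian(W, grid))

opm = 0.5 * np.arctan2(W[:, 3], W[:, 2])          # orientation preference of each unit
```

    The paper's point is precisely that the layouts produced in the commonly studied regimes of such dynamics are periodic, and that aperiodic, experiment-like maps emerge only in an extreme limit stabilized by strong nonlocal interactions.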

    Dosimetric evaluation and radioimmunotherapy of anti-tumour multivalent Fab′ fragments

    We have been investigating the use of cross-linked divalent (DFM) and trivalent (TFM) versions of the anti-carcinoembryonic antigen (CEA) monoclonal antibody A5B7 as possible alternatives to the parent forms (IgG and F(ab′)2), which have been used previously in clinical radioimmunotherapy (RIT) studies in colorectal carcinoma. Comparative biodistribution studies of the similarly sized pairs DFM and F(ab′)2, and TFM and IgG, radiolabelled with both 131I and 90Y, have been described previously using the human colorectal tumour LS174T nude mouse xenograft model (Casey et al (1996) Br J Cancer 74: 1397–1405). In this study, quantitative estimates of radiation distribution and RIT in the xenograft model provided more insight into selecting the most suitable combination for future RIT. Radiation doses were significantly higher in all tissues when antibodies were labelled with 90Y. Major contributing organs were the kidneys, liver and spleen. The extremely high absorbed dose to the kidneys on injection of 90Y-labelled DFM and F(ab′)2, caused by accumulation of the radiometal, would result in extremely high toxicity. These combinations are clearly unsuitable for RIT. The cumulative dose of 90Y-TFM to the kidney was 3 times lower than that of the divalent forms but still twice as high as for 90Y-IgG. TFM clears faster from the blood than IgG, producing higher tumour to blood ratios. Therefore, when considering only the tumour to blood ratios of the total absorbed dose, the data suggest that TFM would be the most suitable candidate. However, when corrected for equitoxic blood levels, doses to normal tissues for TFM were approximately twice the level of IgG, producing a two-fold increase in the overall tumour to normal tissue ratio. In addition, RIT revealed that for a similar level of toxicity and half the administered activity, 90Y-IgG produced a greater therapeutic response. This suggests that the most promising A5B7 antibody form with the radionuclide 90Y may be IgG. Dosimetry analysis revealed that the tumour to normal tissue ratios were greater for all 131I-labelled antibodies. This suggests that 131I may be a more suitable radionuclide for RIT, in terms of lower toxicity to normal tissues. The highest tumour to blood dose and tumour to normal tissue ratio at equitoxic blood levels was achieved with 131I-labelled DFM, suggesting that 131I-DFM may be the best combination of antibody and radionuclide for A5B7. The dosimetry estimates were in agreement with the RIT results in that twice the activity of 131I-DFM must be administered to produce a therapeutic effect similar to that of 131I-TFM. The toxicity in this therapy experiment was minimal, and further experiments at higher doses are required to determine whether there would be any advantage to a higher initial dose rate for 131I-DFM. © 1999 Cancer Research Campaign

    Vaccine candidates derived from a novel infectious cDNA clone of an American genotype dengue virus type 2

    BACKGROUND: A dengue virus type 2 (DEN-2 Tonga/74) isolated from a 1974 epidemic was characterized by mild illness and belongs to the American genotype of DEN-2 viruses. To prepare a vaccine candidate, a previously described 30 nucleotide deletion (Δ30) in the 3' untranslated region of DEN-4 was engineered into the DEN-2 isolate. METHODS: A full-length cDNA clone was generated from the DEN-2 virus and used to produce recombinant DEN-2 (rDEN-2) and rDEN2Δ30. Viruses were evaluated for replication in SCID mice transplanted with human hepatoma cells (SCID-HuH-7 mice), in mosquitoes, and in rhesus monkeys. Neutralizing antibody induction and protective efficacy were also assessed in rhesus monkeys. RESULTS: Replication of the rDEN2Δ30 virus in SCID-HuH-7 mice was reduced ten-fold compared with the parent virus. The rDEN-2 viruses were not infectious for Aedes mosquitoes, but both readily infected Toxorhynchites mosquitoes. In rhesus monkeys, rDEN2Δ30 appeared to be slightly attenuated compared with the parent virus, as measured by the duration and peak of viremia and by neutralizing antibody induction. A derivative of rDEN2Δ30, designated rDEN2Δ30-4995, was generated by incorporation of a point mutation previously identified in the NS3 gene of DEN-4 and was found to be more attenuated than rDEN2Δ30 in SCID-HuH-7 mice. CONCLUSIONS: The rDEN2Δ30 and rDEN2Δ30-4995 viruses can be considered for evaluation in humans and for inclusion in a tetravalent dengue vaccine.

    Grammar-based distance in progressive multiple sequence alignment

    Background: We propose a multiple sequence alignment (MSA) algorithm and compare the alignment quality and execution time of the proposed algorithm with those of existing algorithms. The proposed progressive alignment algorithm uses a grammar-based distance metric to determine the order in which biological sequences are to be pairwise aligned. The progressive alignment proceeds by pairwise aligning new sequences with an ensemble of the previously aligned sequences. Results: The performance of the proposed algorithm is validated via comparison to the popular progressive multiple alignment approaches ClustalW and T-Coffee, and to the more recently developed algorithms MAFFT, MUSCLE, Kalign, and PSAlign, using the BAliBASE 3.0 database of amino acid alignment files and a set of longer sequences generated by the Rose software. The proposed algorithm successfully builds multiple alignments comparable to those of other programs, with significant improvements in running time. The results are especially striking for large datasets. Conclusion: We introduce a computationally efficient progressive alignment algorithm using a grammar-based sequence distance that is particularly useful for aligning large datasets.
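    The abstract does not spell out the grammar-based metric, so the sketch below uses a common stand-in, the normalized compression distance (here via zlib), to show how a grammar/compression style distance can order sequences for progressive alignment. The sequences, the choice of compressor and the pairing strategy are illustrative assumptions, not the paper's actual method.

```python
# Hedged sketch of the idea only: a grammar/compression-based distance between
# sequences used to pick the order of progressive pairwise alignment. A zlib-based
# normalized compression distance (NCD) stands in for the paper's grammar metric.
import zlib
from itertools import combinations

def c(s: str) -> int:
    """Compressed size of a string, used as a crude grammar-complexity proxy."""
    return len(zlib.compress(s.encode()))

def ncd(a: str, b: str) -> float:
    """Normalized compression distance between two sequences."""
    ca, cb, cab = c(a), c(b), c(a + b)
    return (cab - min(ca, cb)) / max(ca, cb)

# Made-up example sequences.
seqs = {
    "s1": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
    "s2": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVA",
    "s3": "MADEEKLPPGWEKRMSRSSGRVYYFNHITNASQ",
}

# Order pairs from most to least similar; a progressive aligner would align the
# closest pair first and then add sequences / profiles in this order.
pairs = sorted(combinations(seqs, 2), key=lambda p: ncd(seqs[p[0]], seqs[p[1]]))
for a, b in pairs:
    print(a, b, round(ncd(seqs[a], seqs[b]), 3))
```

    A real progressive aligner would build a guide tree from such a distance matrix and then align profiles along that tree; only the ordering step is shown here.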

    Design Considerations for Massively Parallel Sequencing Studies of Complex Human Disease

    Massively Parallel Sequencing (MPS) now allows sequencing of entire exomes and genomes at reasonable cost, and its utility for identifying genes responsible for rare Mendelian disorders has been demonstrated. However, for a complex disease, study designs need to accommodate substantial degrees of locus, allelic, and phenotypic heterogeneity, as well as complex relationships between genotype and phenotype. Such considerations include careful selection of samples for sequencing and a well-developed strategy for identifying the few "true" disease susceptibility genes from among the many irrelevant genes that will be found to harbor rare variants. To examine these issues we performed simulation-based analyses comparing several strategies for MPS sequencing in complex disease. Factors examined include genetic architecture, sample size, the number and relationship of individuals selected for sequencing, and a variety of filters based on variant type, multiple observations of genes, and concordance of genetic variants within pedigrees. A two-stage design was assumed, in which genes from the MPS analysis of high-risk families are evaluated in a secondary screening phase in a larger set of probands with more modest family histories. Designs were evaluated using a cost function that assumes the cost of sequencing the whole exome is 400 times that of sequencing a single candidate gene. Results indicate that while requiring variants to be identified in multiple pedigrees and/or in multiple individuals in the same pedigree is an effective strategy for reducing false positives, there is a danger of over-filtering, so that most true susceptibility genes are missed. In most cases, sequencing more than two individuals per pedigree results in reduced power without any benefit in terms of reduced overall cost. Further, our results suggest that although no single strategy is optimal, simulations can provide important guidelines for study design.
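    The cost model lends itself to a small worked example. The sketch below encodes the stated 400:1 ratio between whole-exome and single-candidate-gene sequencing costs for a two-stage design; the pedigree counts, number of surviving candidate genes and proband sample size are made-up values chosen only to show how the trade-off is computed.

```python
# Illustrative cost calculation for a two-stage design of the kind described: exome
# sequencing in high-risk families (stage 1), then candidate-gene screening in a
# larger proband set (stage 2). The 400:1 exome-to-gene cost ratio comes from the
# abstract; the sample sizes and gene counts below are made-up example values.
EXOME_TO_GENE_COST_RATIO = 400      # one exome costs as much as 400 candidate genes

def two_stage_cost(n_exomes: int, n_candidate_genes: int, n_probands: int,
                   gene_unit_cost: float = 1.0) -> float:
    """Total cost in candidate-gene units: stage-1 exomes + stage-2 gene screening."""
    stage1 = n_exomes * EXOME_TO_GENE_COST_RATIO * gene_unit_cost
    stage2 = n_probands * n_candidate_genes * gene_unit_cost
    return stage1 + stage2

# Example: 2 sequenced individuals in each of 20 pedigrees, 50 genes surviving the
# filters, followed up in 1000 probands.
print(two_stage_cost(n_exomes=2 * 20, n_candidate_genes=50, n_probands=1000))
# Sequencing a third relative per pedigree raises the fixed stage-1 cost; per the
# abstract, this typically also reduces power rather than improving the design.
print(two_stage_cost(n_exomes=3 * 20, n_candidate_genes=50, n_probands=1000))
```

    Under this accounting, the stage-2 screening cost scales with how aggressively the filters prune the candidate gene list, which is why the abstract stresses the balance between filtering out false positives and discarding true susceptibility genes.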

    In-training assessment using direct observation of single-patient encounters: a literature review

    We reviewed the literature on instruments for work-based assessment in single clinical encounters, such as the mini-clinical evaluation exercise (mini-CEX), and examined differences between these instruments in characteristics and in feasibility, reliability, validity and educational effect. A PubMed search of the literature published before 8 January 2009 yielded 39 articles dealing with 18 different assessment instruments. One researcher extracted data on the characteristics of the instruments and two researchers extracted data on feasibility, reliability, validity and educational effect. The instruments are predominantly formative. Feasibility is generally deemed good; assessor training occurs only sparsely but is considered crucial for successful implementation. Acceptable reliability can be achieved with 10 encounters. The validity of many instruments has not been investigated, but the validity of the mini-CEX and the ‘clinical evaluation exercise’ is supported by strong and significant correlations with other valid assessment instruments. The evidence from the few studies on educational effects is not very convincing. Reports on clinical assessment instruments for single work-based encounters are generally positive, but supporting evidence is sparse. The feasibility of the instruments seems to be good and reliability requires a minimum of 10 encounters, but no clear conclusions emerge on other aspects. Studies on assessor and learner training and studies examining effects beyond ‘happiness data’ are badly needed.
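    As a hedged illustration of where a "10 encounters" figure can come from (the reviewed studies typically rely on generalisability-type analyses whose variance components are not given in this abstract), the sketch below applies the Spearman-Brown prophecy formula to project composite reliability from an assumed single-encounter reliability of 0.30 against an assumed 0.80 threshold. Both numbers are example values, not data from the review.

```python
# Hedged illustration only: project the reliability of a mean score over n encounters
# from an assumed single-encounter reliability, via the Spearman-Brown formula.
# r_single = 0.30 and the 0.80 threshold are example assumptions, not review data.
def composite_reliability(r_single: float, n_encounters: int) -> float:
    """Spearman-Brown projection: reliability of the mean score over n encounters."""
    return n_encounters * r_single / (1 + (n_encounters - 1) * r_single)

r_single, threshold = 0.30, 0.80
for n in range(1, 16):
    rel = composite_reliability(r_single, n)
    flag = " <- reaches threshold" if rel >= threshold else ""
    print(f"{n:2d} encounters: reliability {rel:.2f}{flag}")
```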